Machine learning models rely on various assumptions to attain high accuracy. One of the preliminary assumptions of these models is that the data are independent and identically distributed (i.i.d.), which implies that the training and test data are sampled from the same distribution. However, this assumption seldom holds in the real world due to distribution shifts. As a result, models that rely on this assumption exhibit poor generalization capabilities. In recent years, dedicated efforts have been made to improve the generalization capabilities of these models, collectively known as \textit{domain generalization methods}. The primary idea behind these methods is to identify stable features or mechanisms that remain invariant across different distributions. Many generalization approaches employ causal theories to describe invariance, since causality and invariance are inextricably intertwined. However, current surveys treat causality-aware domain generalization methods at a very high level. Furthermore, we argue that these methods can be categorized based on how causality is leveraged and in which part of the model pipeline it is used. To this end, we categorize causal domain generalization methods into three categories, namely, (i) invariance via causal data augmentation, applied during the data pre-processing stage; (ii) invariance via causal representation learning, utilized during the representation learning stage; and (iii) invariance via transferring causal mechanisms, applied during the classification stage of the pipeline. Furthermore, this survey includes in-depth insights into benchmark datasets and code repositories for domain generalization methods. We conclude the survey with insights and discussions on future directions.
Online social networks (OSNs) facilitate access to a variety of data, enabling researchers to analyze users' behavior and develop user behavioral analysis models. These models rely heavily on observed data, which is usually biased due to participation inequality. This inequality comprises three groups of online users: lurkers, users who only consume content; engagers, users who contribute minimally to content creation; and contributors, users responsible for creating most of the online content. Failing to consider the contributions of all groups while interpreting population-level interests or sentiments may yield biased results. To reduce the bias induced by the contributors, in this work we focus on highlighting the engagers' contributions in the observed data, as they are more likely to contribute than lurkers and form a larger population than contributors. The first step in analyzing these users' behavior is to find the topics they are exposed to but did not interact with. To this end, we propose a novel framework that helps identify these users and estimate their topic exposure. The exposure estimation mechanism is modeled by incorporating behavioral patterns from similar contributors, as well as users' demographic and profile information.
Despite the remarkable success of COVID-19 vaccines against the virus, a significant portion of the population remains hesitant to get vaccinated, which undermines governments' efforts to control the virus. To address this problem, we need to understand the different factors that give rise to such behavior, including social media discourse, news media coverage, government responses, demographics and socioeconomic status, and COVID-19 statistics, among others. Covering all these aspects makes it difficult to form a complete picture when reasoning about the vaccine hesitancy problem. In this paper, we construct a multi-source, multi-modal, and multi-feature online data repository, CovaxNet. We provide descriptive analyses and insights to illustrate key patterns in CovaxNet. Moreover, we propose a novel approach for connecting online and offline data in order to facilitate inference tasks that exploit complementary information sources.
Previous work has shown the potential of deep learning to predict renal obstruction using kidney ultrasound images. However, these image-based classifiers have been trained with the goal of single-visit inference in mind. We compare methods from video action recognition (i.e., convolutional pooling, LSTM, TSM) to adapt single-visit convolutional models to handle multi-visit inference. We demonstrate that incorporating images from a patient's past hospital visits provides only a small benefit for the prediction of obstructive hydronephrosis. Thus, while including prior ultrasounds is beneficial, prediction based on the latest ultrasound alone is sufficient for patient risk stratification.
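The simplest of the compared aggregation strategies, convolutional pooling, can be sketched as follows: embed each visit with a single-visit encoder, average the embeddings across visits, and classify the pooled vector. The encoder stub, weights, and feature values below are toy assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch of pooling over hospital visits: embed each visit's
# ultrasound with a (here stubbed) single-visit encoder, average the
# embeddings elementwise, then apply a linear classifier.
# All weights and features below are toy values, not from the paper.

def encode_visit(image):
    """Stub for a single-visit CNN encoder: here, two summary features."""
    return [sum(image) / len(image), max(image)]

def mean_pool(embeddings):
    """Average per-visit embeddings elementwise (the pooling step)."""
    n = len(embeddings)
    return [sum(e[i] for e in embeddings) / n for i in range(len(embeddings[0]))]

def classify(embedding, weights, bias=0.0):
    """Linear score; a positive score would indicate predicted obstruction."""
    return sum(x * w for x, w in zip(embedding, weights)) + bias

visits = [[0.1, 0.4, 0.3], [0.2, 0.6, 0.5]]  # toy "images", one per visit
pooled = mean_pool([encode_visit(v) for v in visits])
score = classify(pooled, weights=[1.0, -0.5])
```

The single-visit baseline from the abstract corresponds to calling `classify(encode_visit(visits[-1]), ...)` on the latest visit only.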
Abusive language is a concerning problem in online social media. Past research on detecting abusive language covers different platforms, languages, demographics, etc. However, models trained using these datasets do not perform well in cross-domain evaluation settings. To overcome this, a common strategy is to use a few samples from the target domain to train models to get better performance in that domain (cross-domain few-shot training). However, this might cause the models to overfit the artefacts of those samples. A compelling solution could be to guide the models toward rationales, i.e., spans of text that justify the text's label. This method has been found to improve model performance in the in-domain setting across various NLP tasks. In this paper, we propose RAFT (Rationale Adaptor for Few-shoT classification) for abusive language detection. We first build a multitask learning setup to jointly learn rationales, targets, and labels, and find a significant improvement of 6% macro F1 on the rationale detection task over training solely rationale classifiers. We introduce two rationale-integrated BERT-based architectures (the RAFT models) and evaluate our systems over five different abusive language datasets, finding that in the few-shot classification setting, RAFT-based models outperform baseline models by about 7% in macro F1 scores and perform competitively with models fine-tuned on other source domains. Furthermore, RAFT-based models outperform LIME/SHAP-based approaches in terms of plausibility and are close in performance in terms of faithfulness.
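A common way to realize the multitask setup described above is a weighted sum of a sequence-level label loss and a token-level rationale loss. The loss forms, weighting, and probabilities below are illustrative assumptions, not RAFT's exact formulation.

```python
# Hedged sketch of the multitask idea behind joint rationale/label learning:
# combine a token-level rationale loss with a sequence-level label loss
# via a weighted sum. Illustrative only; not the paper's exact objective.
import math

def binary_ce(p, y):
    """Binary cross-entropy for one predicted probability p against target y."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def multitask_loss(label_prob, label, rationale_probs, rationale_mask, alpha=0.5):
    """Joint loss: label classification + mean token-level rationale loss."""
    label_loss = binary_ce(label_prob, label)
    rat_loss = sum(binary_ce(p, y) for p, y in zip(rationale_probs, rationale_mask))
    rat_loss /= len(rationale_probs)
    return label_loss + alpha * rat_loss

# Toy example: the model is fairly confident in the abusive label and in
# highlighting the second token as the rationale span.
loss = multitask_loss(
    label_prob=0.9, label=1,
    rationale_probs=[0.2, 0.8, 0.1], rationale_mask=[0, 1, 0],
)
```

Gradients of this joint objective flow through a shared encoder, which is what lets the rationale signal regularize the label classifier in the few-shot regime.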
Humans have perfected the art of learning from multiple modalities through sensory organs. Despite their impressive predictive performance on a single modality, neural networks cannot reach human-level accuracy with respect to multiple modalities. This is a particularly challenging task due to variations in the structure of respective modalities. Conditional Batch Normalization (CBN) is a popular method that was proposed to learn contextual features to aid deep learning tasks. This technique uses auxiliary data to improve representational power by learning affine transformations for convolutional neural networks. Despite the boost in performance observed by using CBN layers, our work reveals that the visual features learned by introducing auxiliary data via CBN deteriorate. We perform comprehensive experiments to evaluate the brittleness of CBN networks across various datasets, suggesting that learning from visual features alone could often be superior for generalization. We evaluate CBN models on natural images for bird classification and histology images for cancer type classification. We observe that the CBN network learns close to no visual features on the bird classification dataset and partial visual features on the histology dataset. Our extensive experiments reveal that CBN may encourage shortcut learning between the auxiliary data and labels.
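The core CBN mechanism referenced above can be sketched as a normalization step followed by an affine transform whose parameters are shifted by deltas predicted from the auxiliary data. The single-channel setting, linear predictor, and all weights below are toy assumptions for illustration.

```python
# Minimal sketch of a Conditional Batch Normalization step (illustrative
# only): an auxiliary embedding predicts per-channel deltas for the affine
# parameters (gamma, beta) that are applied after normalization.

def normalize(xs, eps=1e-5):
    """Normalize a list of activations to zero mean, unit variance."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / (var + eps) ** 0.5 for x in xs]

def predict_deltas(aux, w_gamma, w_beta):
    """Tiny linear map from the auxiliary embedding to (delta_gamma,
    delta_beta) for one channel; a real CBN layer uses a small MLP."""
    dg = sum(a * w for a, w in zip(aux, w_gamma))
    db = sum(a * w for a, w in zip(aux, w_beta))
    return dg, db

def cbn(xs, aux, w_gamma, w_beta, gamma=1.0, beta=0.0):
    """Conditional BN: normalize, then apply an affine transform whose
    parameters are shifted by deltas predicted from auxiliary data."""
    dg, db = predict_deltas(aux, w_gamma, w_beta)
    return [(gamma + dg) * x + (beta + db) for x in normalize(xs)]

feats = [1.0, 2.0, 3.0, 4.0]
aux_embedding = [0.5, -0.5]  # e.g. encoded text/metadata (toy values)
out = cbn(feats, aux_embedding, w_gamma=[0.2, 0.0], w_beta=[0.0, 0.4])
```

Because the auxiliary data directly modulates every channel's scale and shift, the network can in principle route label information through the conditioning pathway instead of the visual features, which is the shortcut-learning risk the abstract describes.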
In this paper, we present the Women in Computer Vision Workshop, WiCV 2022, organized alongside the hybrid CVPR 2022 in New Orleans, Louisiana. It provides a voice to a minority (women) group in the computer vision community and focuses on increasing the visibility of these researchers in academia and industry. WiCV believes that such an event can play an important role in lowering the gender imbalance in the field of computer vision. WiCV is organized each year to provide a) opportunities for collaboration between researchers from minority groups, b) mentorship for female junior researchers, c) financial support to presenters to overcome monetary burdens, and d) a large selection of role models who can serve as examples to younger researchers at the beginning of their careers. In this paper, we present a report on the workshop program, trends over the past years, and a summary of statistics regarding presenters, attendees, and sponsorship for the WiCV 2022 workshop.
Atmospheric effects, such as turbulence and background thermal noise, inhibit the propagation of coherent light used in on-off keying free-space optical communication. Here we introduce and experimentally validate convolutional neural networks to reduce the bit error rate of free-space optical communication in post-processing that are significantly simpler and cheaper than existing solutions based on advanced optics. Our approach consists of two neural networks: the first determines the presence of coherent bit sequences amid thermal noise and turbulence, and the second demodulates the coherent bit sequences. All data for our networks were obtained experimentally by generating on-off keyed streams of coherent light, combining them with thermal light, and passing them through a water tank with turbulence to simulate atmospheric turbulence, achieving high accuracy. Our convolutional neural networks improve detection accuracy compared with a threshold classification scheme and have the capability to be integrated with current demodulation and error correction schemes.
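The threshold classification baseline the networks are compared against can be sketched simply: each received intensity sample is read as a 1 if it exceeds a fixed threshold and a 0 otherwise. The threshold and the toy intensity values below are illustrative assumptions, not measured data.

```python
# Sketch of the threshold-classification baseline for on-off keying (OOK):
# demodulate each received intensity sample to 1 if it exceeds a fixed
# threshold, else 0, then measure the bit error rate against the sent bits.

def threshold_demodulate(intensities, threshold):
    """Map received intensity samples to bits by simple thresholding."""
    return [1 if x > threshold else 0 for x in intensities]

def bit_error_rate(decoded, sent):
    """Fraction of decoded bits that differ from the transmitted bits."""
    errors = sum(d != s for d, s in zip(decoded, sent))
    return errors / len(sent)

sent_bits = [1, 0, 1, 1, 0, 0, 1, 0]
# Toy received intensities: turbulence fades some "on" pulses and thermal
# noise lifts some "off" slots, producing threshold errors.
received = [0.9, 0.2, 0.35, 0.8, 0.6, 0.1, 0.7, 0.3]
decoded = threshold_demodulate(received, threshold=0.5)
ber = bit_error_rate(decoded, sent_bits)
```

The failure mode visible here, faded pulses falling below threshold and noise-lifted slots rising above it, is exactly what a learned classifier operating on the full received waveform can mitigate.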
Fine-tuning models on edge devices such as mobile phones would enable privacy-preserving personalization over sensitive data. However, edge training has historically been limited to relatively small models with simple architectures because training is both memory- and energy-intensive. We present POET, an algorithm to train large neural networks on memory-scarce edge devices. POET jointly optimizes the integrated search space of rematerialization and paging, two algorithms that reduce the memory consumption of backpropagation. Given a memory budget and a runtime constraint, we formulate a mixed-integer linear program (MILP) for optimal training. Our approach enables the training of significantly larger models on embedded devices while reducing energy consumption, without modifying the mathematical correctness of backpropagation. We demonstrate that ResNet-18 and BERT can be fine-tuned within the memory constraints of Cortex-class embedded devices while outperforming current edge training methods in energy efficiency. POET is an open-source project at https://github.com/shishirpatil/poet
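The tradeoff the MILP optimizes can be illustrated with a much cruder heuristic: for each layer's activation, either keep it in memory or rematerialize (recompute) it during the backward pass, paying extra compute to stay under the memory budget. The greedy scheme, sizes, and costs below are toy assumptions; POET's actual solver jointly optimizes rematerialization with paging via the MILP, which this sketch does not attempt.

```python
# Toy illustration of the memory/compute tradeoff POET optimizes: for each
# layer's activation, either keep it in RAM (memory cost) or rematerialize
# it in the backward pass (extra compute). Greedy and illustrative only.

def plan_rematerialization(activation_bytes, recompute_costs, memory_budget):
    """Greedily drop the activations that are cheapest to recompute per
    byte until the stored activations fit the memory budget. Returns
    (set of layer indices to rematerialize, total extra compute)."""
    stored = sum(activation_bytes)
    # Consider the cheapest recompute-per-byte candidates first.
    order = sorted(range(len(activation_bytes)),
                   key=lambda i: recompute_costs[i] / activation_bytes[i])
    remat, extra_compute = set(), 0
    for i in order:
        if stored <= memory_budget:
            break
        stored -= activation_bytes[i]
        extra_compute += recompute_costs[i]
        remat.add(i)
    return remat, extra_compute

# Four layers with toy activation sizes (bytes) and recompute costs
# (arbitrary compute units).
sizes = [400, 300, 200, 100]
costs = [8, 9, 2, 1]
remat, extra = plan_rematerialization(sizes, costs, memory_budget=500)
```

Unlike this greedy sketch, an MILP can certify the minimum-energy schedule under both the memory budget and the runtime constraint, which is why POET formulates the problem exactly.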
Learning predictors that do not rely on spurious correlations involves building causal representations. However, learning such representations is very challenging. We therefore formulate the problem of learning causal representations from high-dimensional data and study causal recovery with synthetic data. This work introduces BCD, a latent variable decoder model for Bayesian Causal Discovery, and performs experiments in mildly supervised and unsupervised settings. We present a series of synthetic experiments to characterize important factors for causal discovery and show that using known intervention targets as labels helps unsupervised Bayesian inference of the structure and parameters of linear Gaussian additive noise latent structural causal models.